Special Matrices

A band matrix is a square matrix in which all elements are zero except for a band around the main diagonal. A tridiagonal system (i.e. one with a bandwidth of 3) can be written generally as

$$\begin{bmatrix} f_0 & g_0 & & \\ e_1 & f_1 & g_1 & \\ & e_2 & f_2 & g_2 \\ & & e_3 & f_3 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_0 \\ b_1 \\ b_2 \\ b_3 \end{bmatrix}$$

Based on the LU decomposition, the Thomas algorithm factors the matrix with

$$e_k \leftarrow \frac{e_k}{f_{k-1}}, \qquad f_k \leftarrow f_k - e_k\,g_{k-1}, \qquad k = 1,\dots,n-1.$$

The forward substitution is

$$b_k \leftarrow b_k - e_k\,b_{k-1}, \qquad k = 1,\dots,n-1,$$

and the back substitution is

$$x_{n-1} = \frac{b_{n-1}}{f_{n-1}}, \qquad x_k = \frac{b_k - g_k\,x_{k+1}}{f_k}, \qquad k = n-2,\dots,0.$$

Example: Solve the following tridiagonal system using the Thomas algorithm.

$$\begin{bmatrix} 2.04 & -1 & & \\ -1 & 2.04 & -1 & \\ & -1 & 2.04 & -1 \\ & & -1 & 2.04 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 40.8 \\ 0.8 \\ 0.8 \\ 200.8 \end{bmatrix}$$

The triangular decomposition gives

$$\begin{bmatrix} 2.04 & -1 & & \\ -0.49 & 1.55 & -1 & \\ & -0.645 & 1.395 & -1 \\ & & -0.717 & 1.323 \end{bmatrix}$$

and the solution of the system is

$$x = [65.970,\; 93.778,\; 124.538,\; 159.480]^T.$$

Cholesky Decomposition

This algorithm is based on the fact that a symmetric, positive-definite matrix can be decomposed as [A] = [L][L]^T. Proceeding as in the Crout elimination, the lower and upper factors simply hold the same values, since [U] = [L]^T, so the equations of the LU factorization can be adapted as follows. Any element below the diagonal is computed as

$$l_{ij} = \frac{a_{ij} - \sum_{k=0}^{j-1} l_{ik}\,l_{jk}}{l_{jj}}, \qquad i = 0,\dots,n-1,\; j = 0,\dots,i-1.$$

Of the terms on or above the diagonal, only the diagonal itself needs to be computed:

$$l_{ii} = \sqrt{a_{ii} - \sum_{k=0}^{i-1} l_{ik}^{2}}, \qquad i = 0,\dots,n-1.$$

The Java implementation is:

  // Overwrites the lower triangle of A with the Cholesky factor L.
  static public void Cholesky(double A[][]) {
    int n = A.length;

    for (int i = 0; i < n; i++) {
      // elements below the diagonal
      for (int j = 0; j <= i - 1; j++) {
        double suma = 0;
        for (int k = 0; k <= j - 1; k++)
          suma += A[i][k] * A[j][k];

        A[i][j] = (A[i][j] - suma) / A[j][j];
      }

      // diagonal element
      double suma = 0;
      for (int k = 0; k <= i - 1; k++)
        suma += A[i][k] * A[i][k];
      A[i][i] = Math.sqrt(A[i][i] - suma);
    }
  }

Jacobi Method

In numerical analysis, the Jacobi method is an iterative method used for solving systems of linear equations Ax = b. The algorithm is named after the German mathematician Carl Gustav Jakob Jacobi.

Description

The basis of the method is to construct a convergent sequence defined iteratively; the limit of this sequence is precisely the solution of the system. For practical purposes, stopping the algorithm after a finite number of steps yields an approximation of the solution x. The sequence is constructed by splitting the system matrix as

$$A = D + L + U,$$

where D is a diagonal matrix, L is a strictly lower triangular matrix, and U is a strictly upper triangular matrix. Starting from Ax = b, we can rewrite this equation as

$$Dx = b - (L + U)x,$$

and then, provided that $a_{ii} \ne 0$ for each i,

$$x = D^{-1}\bigl(b - (L + U)x\bigr).$$

For the iterative rule, the Jacobi method can therefore be expressed as

$$x^{(k+1)} = D^{-1}\bigl(b - (L + U)\,x^{(k)}\bigr),$$

where k is the iteration counter; component by component we finally have

$$x_i^{(k+1)} = \frac{b_i - \sum_{j \ne i} a_{ij}\,x_j^{(k)}}{a_{ii}}, \qquad i = 1,\dots,n.$$

Note that the calculation of $x_i^{(k+1)}$ requires all elements of $x^{(k)}$ except the one with the same index i. Therefore, unlike in the Gauss-Seidel method, $x_i^{(k)}$ cannot be overwritten with $x_i^{(k+1)}$, since its value is still needed for the remainder of the calculations. This is the most significant difference between the Jacobi and Gauss-Seidel methods: the minimum amount of storage is two vectors of dimension n, and an explicit copy must be made at each iteration.
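To make the splitting concrete, here it is written out for the 3×3 test system solved by the Java program below; the matrix, right-hand side and initial guess $x^{(0)} = (1, 2, 3)^T$ are taken from that listing, and the arithmetic for the first component is only an illustrative check:

$$A = \begin{bmatrix} 4 & -2 & 1 \\ 1 & -5 & 3 \\ 2 & 1 & 4 \end{bmatrix}
= \underbrace{\begin{bmatrix} 4 & 0 & 0 \\ 0 & -5 & 0 \\ 0 & 0 & 4 \end{bmatrix}}_{D}
+ \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 2 & 1 & 0 \end{bmatrix}}_{L}
+ \underbrace{\begin{bmatrix} 0 & -2 & 1 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{bmatrix}}_{U},
\qquad b = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix},$$

so the first Jacobi update of the first component is $x_0^{(1)} = \bigl(2 - (-2\cdot 2 + 1\cdot 3)\bigr)/4 = 0.75$.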
Convergence

The Jacobi method always converges if the matrix A is strictly diagonally dominant, and it can converge even when this condition is not satisfied; in practice, however, the diagonal elements of the matrix need to be large in magnitude compared with the other elements. For instance, the 3×3 matrix used in the Java code below is strictly diagonally dominant (|4| > |−2| + |1|, |−5| > |1| + |3|, |4| > |2| + |1|), so convergence is guaranteed for it.

Algorithm

The Jacobi method can be written in the form of an algorithm as follows:

Algorithm: Jacobi method
function Jacobi(A, b, x0)            // x0 is an initial approximation to the solution
  for k = 1 until convergence do
    for i = 1 to n do
      σ = 0
      for j = 1 to n do
        if j ≠ i then σ = σ + a_ij · x_j^(k)
      end for
      x_i^(k+1) = (b_i − σ) / a_ii
    end for
    check whether convergence has been reached
  end for

Algorithm in Java:

public class Jacobi {
  double[][] matriz = {{4, -2, 1}, {1, -5, 3}, {2, 1, 4}};
  double[] vector = {2, 1, 3};          // right-hand side b
  double[] vectorR = {1, 2, 3};         // current iterate, starts from the initial guess
  double[] x2 = vectorR;                // copy of the previous iterate
  double sumatoria = 0;
  int max = 50;                         // maximum number of iterations

  public void SolJacobi() {
    int tam = matriz.length;
    for (int t = 0; t < max; t++) {
      x2 = vectorR.clone();             // Jacobi needs the complete previous iterate
      for (int i = 0; i < tam; i++) {
        sumatoria = 0;
        for (int s = 0; s < tam; s++) {
          if (s != i) sumatoria += matriz[i][s] * x2[s];
        }
        vectorR[i] = (vector[i] - sumatoria) / matriz[i][i];
      }
      System.out.println("vector " + t + ": " + java.util.Arrays.toString(vectorR));
    }
  }

  public static void main(String[] args) {
    Jacobi obj = new Jacobi();
    obj.SolJacobi();
  }
}
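As a quick sanity check (this helper is not in the original listing), the final iterate can be verified by computing the residual r = b − A·x; a residual norm close to zero indicates that the iteration has converged. A minimal sketch, assuming the fields matriz, vector and vectorR of the Jacobi class above:

  // Euclidean norm of the residual b - A*x for the current iterate vectorR.
  public double residualNorm() {
    double norma = 0;
    for (int i = 0; i < matriz.length; i++) {
      double ri = vector[i];
      for (int j = 0; j < matriz.length; j++) {
        ri -= matriz[i][j] * vectorR[j];
      }
      norma += ri * ri;
    }
    return Math.sqrt(norma);
  }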
Gauss-Seidel Method

In numerical analysis, the Gauss-Seidel method is an iterative method used to solve systems of linear equations. It is named in honor of the German mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel and is similar to the Jacobi method.

Description

It is an iterative method, which means that it starts from an initial approximation and repeats the process until a solution with as small a margin of error as desired is obtained. We seek the solution of a system of linear equations, in matrix notation

$$Ax = b.$$

Provided that $a_{ii} \ne 0$ for i = 1, ..., n, the Gauss-Seidel iteration can be written as

$$x_i^{(k+1)} = \frac{b_i - \sum_{j=1}^{i-1} a_{ij}\,x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij}\,x_j^{(k)}}{a_{ii}}, \qquad i = 1,\dots,n. \tag{*}$$

The difference between this method and Jacobi is that in the latter the improved values are not used until the iteration has been completed, whereas Gauss-Seidel uses each newly computed component immediately.

Convergence theorem: Suppose A is a nonsingular matrix that is strictly diagonally dominant by rows,

$$|a_{ii}| > \sum_{j \ne i} |a_{ij}| \quad \text{for all } i,$$

or by columns,

$$|a_{jj}| > \sum_{i \ne j} |a_{ij}| \quad \text{for all } j.$$

Then the Gauss-Seidel method converges to the solution of the system of equations Ax = b, and the convergence is at least as fast as that of the Jacobi method.

For the cases in which the method converges, we first show that it can be written in the form

$$x^{(k+1)} = B\,x^{(k)} + c \tag{**}$$

(the term $x^{(k)}$ is the approximation obtained after the k-th iteration); written this way, the iteration has the general form of a stationary iterative method.

First we must show that the linear problem we want to solve can be represented in the form (**). To this end we write the matrix as its diagonal times the sum of a strictly lower triangular matrix, the identity and a strictly upper triangular matrix, A = D(L + I + U), with D = diag(a_11, ..., a_nn). Carrying out the necessary manipulations, the method can be written as

$$x^{(k+1)} = -(L + I)^{-1} U\,x^{(k)} + (L + I)^{-1} D^{-1} b,$$

hence B = −(L + I)^{-1} U and c = (L + I)^{-1} D^{-1} b. The relation between successive errors $e^{(k)} = x^{(k)} - x$ is obtained by subtracting x = Bx + c from (**):

$$e^{(k+1)} = B\,e^{(k)}.$$

Now suppose that λ_i, i = 1, ..., n, are the eigenvalues of B, with corresponding eigenvectors u_i, i = 1, ..., n, which are linearly independent. Then we can write the initial error as

$$e^{(0)} = \sum_{i=1}^{n} \alpha_i\,u_i, \qquad \text{so that} \qquad e^{(k)} = \sum_{i=1}^{n} \alpha_i\,\lambda_i^{k}\,u_i. \tag{***}$$

Therefore the iteration converges if and only if |λ_i| < 1 for i = 1, ..., n. From this the following theorem follows:

Theorem: A necessary and sufficient condition for a stationary iterative method $x^{(k+1)} = Bx^{(k)} + c$ to converge for an arbitrary initial approximation $x^{(0)}$ is that ρ(B) < 1, where ρ(B) is the spectral radius of B.

Explanation

We choose an initial approximation $x^{(0)}$, compute the matrix B and the vector c with the formulas above, and repeat the process until $x^{(k)}$ is sufficiently close to $x^{(k-1)}$, where k represents the number of the iteration.

Algorithm

The Gauss-Seidel method can be written as an algorithm as follows:

Algorithm: Gauss-Seidel method
function GaussSeidel(A, b, x0)       // x0 is an initial approximation to the solution
  for k = 1 until convergence do
    for i = 1 to n do
      σ = 0
      for j = 1 to n do
        if j ≠ i then σ = σ + a_ij · x_j
      end for
      x_i = (b_i − σ) / a_ii
    end for
    check whether convergence has been reached
  end for
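Only the Jacobi loop is coded above; the following is a minimal companion sketch of the Gauss-Seidel update in the same style. The class name, method name and test system mirror the Jacobi listing and are illustrative choices, not part of the original slides. The key difference is that no copy of the previous iterate is kept: each component is overwritten and reused immediately.

public class GaussSeidel {
  double[][] matriz = {{4, -2, 1}, {1, -5, 3}, {2, 1, 4}};  // same test system as the Jacobi listing
  double[] vector = {2, 1, 3};    // right-hand side b
  double[] vectorR = {1, 2, 3};   // initial guess, updated in place
  int max = 50;                   // maximum number of iterations

  public void solGaussSeidel() {
    int tam = matriz.length;
    for (int t = 0; t < max; t++) {
      for (int i = 0; i < tam; i++) {
        double sumatoria = 0;
        for (int s = 0; s < tam; s++) {
          // vectorR already holds the updated values for s < i
          if (s != i) sumatoria += matriz[i][s] * vectorR[s];
        }
        vectorR[i] = (vector[i] - sumatoria) / matriz[i][i];
      }
      System.out.println("vector " + t + ": " + java.util.Arrays.toString(vectorR));
    }
  }

  public static void main(String[] args) {
    new GaussSeidel().solGaussSeidel();
  }
}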
EXAMPLE: JACOBI AND GAUSS-SEIDEL METHODS

These are two numerical methods that allow us to find the solution of systems with the same number of equations as unknowns. In both methods the following process is carried out, with a small variation in Gauss-Seidel.

We have these equations:

5x − 2y + z = 3
−x − 7y + 3z = −2
2x − y + 8z = 1

1. Solve each equation for one of the unknowns in terms of the others:

x = (3 + 2y − z) / 5
y = (x − 3z − 2) / (−7)
z = (1 − 2x + y) / 8

2. Give initial values to the unknowns:

x0 = 0, y0 = 0, z0 = 0

Jacobi: substitute the initial values into each equation; this gives new values to be used in the next iteration:

x = (3 + 2·0 − 0) / 5 = 0.60
y = (0 − 3·0 − 2) / (−7) ≈ 0.286
z = (1 − 2·0 + 0) / 8 = 0.125

Gauss-Seidel: substitute into each equation the values found immediately before:

x = (3 + 2·0 − 0) / 5 = 0.6
y = (0.6 − 3·0 − 2) / (−7) = 0.2
z = (1 − 2·0.6 + 0.2) / 8 = 0

Perform as many iterations as desired, using the newly found values as the initial values of the next iteration. The execution of the algorithm can be stopped by monitoring the approximation error, which can be computed with the formula

$$\text{error} = \sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2 + (z_1 - z_0)^2}.$$

(The original slides showed spreadsheet iteration tables for Jacobi and for Gauss-Seidel at this point.)

The main difference is that the Gauss-Seidel method uses the newly found values immediately, which makes the whole process faster and, consequently, makes it the more effective method.

The formulas used in the Excel sheet for the Jacobi method are

=(3+2*D5-E5)/5
=(C5-3*E5-2)/-7
=(1-2*C5+D5)/8
=RAIZ((C6-C5)^2 + (D6-D5)^2 + (E6-E5)^2)

corresponding to the variables X, Y, Z and the error, respectively (RAIZ is the Spanish Excel name for the SQRT function).

For Gauss-Seidel:

=(3+2*J5-K5)/5
=(I6-3*K5-2)/-7
=(1-2*I6+J6)/8
=RAIZ((I6-I5)^2 + (J6-J5)^2 + (K6-K5)^2)
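As a closing illustration (not part of the original slides), the stopping rule just described can be coded once and used in either iteration loop. A minimal sketch, assuming the array-based representation used in the Java listings above:

  // Error between two successive iterates: sqrt((x1-x0)^2 + (y1-y0)^2 + (z1-z0)^2),
  // generalized to n unknowns.
  static double iterationError(double[] previo, double[] actual) {
    double suma = 0;
    for (int i = 0; i < actual.length; i++) {
      double d = actual[i] - previo[i];
      suma += d * d;
    }
    return Math.sqrt(suma);
  }

  // Example use inside the Jacobi loop (the tolerance 1e-8 is an illustrative choice):
  // if (iterationError(x2, vectorR) < 1e-8) break;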